
    Behavioral estimates of interhemispheric transmission time and the signal detection method: A reappraisal

    On the basis of a review of the literature, Bashore (1981) concluded that only simple reaction time experiments with manual responses yielded consistent behavioral estimates of interhemispheric transmission time. A closer look at the data, however, revealed that these experiments were also the only ones in which large numbers of observations were invariably obtained from many subjects. To investigate whether this methodological confound was the origin of Bashore’s conclusion, two experiments were run in which subjects had to react to lateralized light flashes. The first experiment involved manual reactions, the second verbal reactions. Each experiment included a condition without catch trials (i.e., simple reaction time) and two conditions with catch trials, i.e., trials on which no stimulus was given and the response had to be withheld. Both experiments returned consistent estimates of interhemispheric transmission time in the range of 2–3 msec. No differences were found between the simple reaction time condition and the signal detection conditions with catch trials. The data were analyzed according to variable criterion theory, which showed that the effect of catch trials, like the effect of interhemispheric transmission, was located at the level of the detection criterion rather than in the rate of information transmission.
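Behavioral estimates of this kind are typically computed as the difference between mean reaction times for crossed stimulus–hand pairings (where the signal must cross hemispheres) and uncrossed pairings. A minimal sketch of that computation, with a hypothetical function name and illustrative reaction times not taken from the article:

```python
import statistics

def crossed_uncrossed_difference(uncrossed_rts, crossed_rts):
    """Estimate interhemispheric transmission time as the
    crossed-uncrossed difference: mean RT when stimulus and responding
    hand are on opposite sides (signal must cross hemispheres) minus
    mean RT when they are on the same side."""
    return statistics.mean(crossed_rts) - statistics.mean(uncrossed_rts)

# Hypothetical reaction times in milliseconds
uncrossed = [312, 305, 298, 310, 301]  # e.g., left flash, left-hand response
crossed   = [314, 309, 300, 312, 305]  # e.g., left flash, right-hand response
cud = crossed_uncrossed_difference(uncrossed, crossed)  # 2.8 msec
```

With these illustrative values, the estimate falls in the same 2–3 msec range the experiments report.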

    Interhemispheric transfer and the processing of foveally presented stimuli

    A review of the literature shows that the foveal projections to the LVF and the RVF do not overlap: each half of the fovea projects only to the contralateral hemisphere. This means that foveal representations of words are effectively split and that interhemispheric communication is needed to recognise centrally presented words.

    How many participants do we have to include in properly powered experiments? A tutorial of power analysis with reference tables

    Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This is more than is common in current practice. In addition, as soon as a between-groups variable or an interaction is involved, numbers of 100, 200, and even more participants are needed. As long as we do not accept these facts, we will keep on running underpowered studies with unclear results. Addressing the issue requires a change in the way research is evaluated by supervisors, examiners, reviewers, and editors. The present paper describes reference numbers needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, two-factor designs involving two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot design). The numbers are given for the traditional, frequentist analysis with p < .05 and for Bayesian analysis with a Bayes factor BF > 10. These numbers provide researchers with a standard to determine (and justify) the sample size of an upcoming study. The article also describes how researchers can improve the power of their study by including multiple observations per condition per participant.
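The within-participants case above can be checked with a short Monte Carlo simulation: draw difference scores from a normal distribution with mean d = 0.4 and SD 1, run a one-sample t test, and count rejections. This is an illustrative sketch (the critical t value for df = 51 is hard-coded as an approximation), not code from the tutorial:

```python
import math
import random
import statistics

def simulated_power(n, d, t_crit, n_sims=5000, seed=1):
    """Monte Carlo power of a paired (one-sample) t test: each simulated
    participant contributes one difference score drawn from N(d, 1),
    so the population effect size is Cohen's d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        diffs = [rng.gauss(d, 1.0) for _ in range(n)]
        t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
        if abs(t) > t_crit:
            hits += 1
    return hits / n_sims

# Two-sided alpha = .05 critical t for df = 51 is approximately 2.008
power = simulated_power(n=52, d=0.4, t_crit=2.008)  # close to 0.80
```

With 52 participants the simulated power lands near the 80% target, matching the "over 50 participants" figure in the abstract.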

    Algorithms for randomness in the behavioral sciences: A tutorial

    Simulations and experiments frequently demand the generation of random numbers that have specific distributions. This article describes which distributions should be used for the most common problems and gives algorithms to generate the numbers. It is also shown that a commonly used permutation algorithm (Nilsson, 1978) is deficient.
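For the specific case of random permutations, the standard correct algorithm is the Fisher–Yates shuffle, which makes all n! orderings equally likely; naive swap-based alternatives such as the deficient algorithm discussed in the article do not. A sketch (not code from the article):

```python
import random

def fisher_yates_shuffle(items, rng=random):
    """In-place Fisher-Yates shuffle: walking backwards through the
    list, swap each position with a uniformly chosen position at or
    before it, so every permutation has equal probability."""
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)  # uniform over 0..i inclusive
        items[i], items[j] = items[j], items[i]
    return items
```

Python's standard library `random.shuffle` implements this same algorithm.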

    The enlightenment

    The term Enlightenment is used to refer to intellectual and social developments in the 18th century. The underlying force was an increasing belief in the scientific approach to attaining knowledge, as opposed to the medieval reliance on religion and tradition. The more successful science became, the more intellectuals began to see the scientific method as a way to organize society. With respect to clinical psychology, the Enlightenment was characterized by a new approach to mental illness and the treatment of the mentally ill, who came to be seen as ailing patients in need of help in specialized institutions.

    Combining speed and accuracy in cognitive psychology: is the Inverse Efficiency Score (IES) a better dependent variable than the mean Reaction Time (RT) and the Percentage of Errors (PE)?

    Experiments in cognitive psychology usually return two dependent variables: the percentage of errors and the reaction time of the correct responses. Townsend and Ashby (1978, 1983) proposed the inverse efficiency score (IES) as a way to combine both measures and, hence, to provide a better summary of the findings. In this article we examine the usefulness of IES by applying it to existing datasets. Although IES does give a better summary of the findings in some cases, in most cases the variance of the measure is increased to such an extent that it becomes less informative. Against our initial hopes, we have to conclude that it is not a good idea to limit the statistical analyses to IES without further checking of the data.
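The IES itself is simple to compute: mean correct RT divided by the proportion of correct responses (one minus the proportion of errors). A minimal sketch with hypothetical values:

```python
def inverse_efficiency_score(mean_correct_rt, proportion_errors):
    """IES = mean correct RT / (1 - PE). Errors inflate the score, so a
    fast but error-prone condition is penalized; the same division also
    inflates the variance of the measure when PE is high, which is the
    problem discussed in the article."""
    return mean_correct_rt / (1.0 - proportion_errors)

# Hypothetical condition: mean correct RT of 600 ms with 10% errors
ies = inverse_efficiency_score(600.0, 0.10)  # 600 / 0.9, about 666.7 ms
```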

    Word skipping: implications for theories of eye movement control in reading

    This chapter provides a meta-analysis of the factors that govern word skipping in reading. It is concluded that the primary predictor is the length of the word to be skipped. A much smaller effect is due to the processing ease of the word (e.g., the frequency of the word and its predictability in the sentence).

    Moving beyond Kucera and Francis: a critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English

    Word frequency is the most important variable in research on word processing and memory. Yet, the main criterion for selecting word frequency norms has been the availability of the measure, rather than its quality. As a result, much research is still based on the old Kucera and Francis frequency norms. By using the lexical decision times of recently published megastudies, we show how bad this measure is and what must be done to improve it. In particular, we investigated the size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure. We observed that corpus size is of practical importance for small sizes (depending on the frequency of the word), but not for sizes above 16-30 million words. As for the language register, we found that frequencies based on television and film subtitles are better than frequencies based on written sources, certainly for the monosyllabic and bisyllabic words used in psycholinguistic research. Finally, we found that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity is better than a measure based on raw frequency of occurrence. Part of the superiority of the latter is due to the words that are frequently used as names. Assembling a new frequency norm on the basis of these considerations turned out to predict word processing times much better than did the existing norms (including Kucera & Francis and Celex). The new SUBTL frequency norms from the SUBTLEXUS corpus are freely available for research purposes from http://brm.psychonomic-journals.org/content/supplemental, as well as from the University of Ghent and Lexique Web sites.
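The distinction between raw frequency and contextual diversity can be made concrete with a small sketch: raw frequency counts every occurrence of a word across the corpus, while contextual diversity counts the number of documents (for a subtitle corpus, individual subtitle files) in which the word appears at least once. The mini-corpus below is hypothetical:

```python
from collections import Counter

def frequency_and_contextual_diversity(documents):
    """Raw frequency: total number of token occurrences in the corpus.
    Contextual diversity: number of documents containing the word."""
    freq, cd = Counter(), Counter()
    for doc in documents:
        tokens = doc.lower().split()
        freq.update(tokens)        # count every occurrence
        cd.update(set(tokens))     # count each document at most once
    return freq, cd

corpus = ["the cat saw the dog", "the dog ran"]
freq, cd = frequency_and_contextual_diversity(corpus)
# "the" occurs 3 times in total but in only 2 documents
```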

    A validated set of tool pictures with matched objects and non-objects for laterality research

    Neuropsychological and neuroimaging research has established that knowledge related to tool use and tool recognition is lateralized to the left cerebral hemisphere. Recently, behavioural studies with the visual half-field technique have confirmed this lateralization. A limitation of this research was that different sets of stimuli had to be used for the comparison of tools to other objects and of objects to non-objects. Therefore, we developed a new set of stimuli containing matched triplets of tools, other objects and non-objects. With the new stimulus set, we successfully replicated the findings of no visual field advantage for objects in an object recognition task combined with a significant right visual field advantage for tools in a tool recognition task. The set of stimuli is available as supplemental data to this article.